Anthropic’s AI Models Simulate DeFi Hacks, Expose Smart Contract Flaws
Advanced AI models have demonstrated the ability to autonomously exploit vulnerabilities in blockchain smart contracts, revealing significant financial risks. Researchers tested Anthropic's Claude Opus 4.5 and Claude Sonnet 4.5 alongside GPT-5 using the SCONE-bench benchmark, which comprises 405 real contracts exploited between 2020 and 2025. The results show that AI-driven exploits are not merely feasible but pose a substantial economic threat.
The study also simulated attacks on contracts deployed after March 2025, with AI agents extracting $4.6 million in simulated funds. This underscores how little time developers now have to patch vulnerabilities before exploitation. As AI exposes weaknesses in projects lacking real-world value, prioritizing utility over speculative tokens becomes increasingly critical.
Beyond historical contracts, Sonnet 4.5 and GPT-5 analyzed 2,849 newly deployed contracts, demonstrating that AI-driven attacks can scale. Blockchain security practices must evolve to keep pace with these rapidly advancing capabilities.